26 research outputs found

    Digital phenotyping and genotype-to-phenotype (G2P) models to predict complex traits in cereal crops

    The revolution in digital phenotyping, combined with new layers of omics and envirotyping tools, offers great promise to improve selection and accelerate genetic gains in crop improvement. This chapter examines the latest methods involving digital phenotyping tools to predict complex traits in cereal crops. The chapter has two parts. The first part, entitled “Digital phenotyping as a tool to support breeding programs”, reviews the secondary phenotypes measured by high-throughput plant phenotyping that are potentially useful for breeding. The second part, “Implementing complex G2P models in breeding programs”, discusses the integration of data from digital phenotyping into genotype-to-phenotype (G2P) models to improve the prediction of complex traits using genomic information. The current status of statistical models for incorporating secondary traits in univariate and multivariate models, as well as how to better handle longitudinal traits (for example light interception, biomass accumulation, canopy height), is reviewed.

    A Neural Network Method for Classification of Sunlit and Shaded Components of Wheat Canopies in the Field Using High-Resolution Hyperspectral Imagery

    (1) Background: Information-rich hyperspectral sensing, together with robust image analysis, is providing new research pathways in plant phenotyping. This combination facilitates the acquisition of spectral signatures of individual plant organs as well as detailed information about the physiological status of plants. Despite the advances in hyperspectral technology for field-based plant phenotyping, little is known about the characteristic spectral signatures of shaded and sunlit components in wheat canopies. Non-imaging hyperspectral sensors provide no spatial information and thus cannot distinguish the spectral reflectance differences between canopy components. On the other hand, the rapid development of high-resolution imaging spectroscopy sensors opens new opportunities to investigate the reflectance spectra of individual plant organs, leading to an understanding of canopy biophysical and chemical characteristics. (2) Method: This study reports the development of a computer vision pipeline to analyze ground-acquired imaging spectrometry with high spatial and spectral resolutions for plant phenotyping. The work focuses on the critical steps in the image analysis pipeline, from pre-processing to the classification of hyperspectral images. Two convolutional neural networks (CNNs) are employed to automatically map wheat canopy components in shaded and sunlit regions and to determine their specific spectral signatures. The first method uses pixel vectors of the full spectral features as inputs to the CNN model; the second integrates the dimension reduction technique known as linear discriminant analysis (LDA) with the CNN to increase feature discrimination and improve computational efficiency. (3) Results: The proposed technique alleviates the limitations and lack of separability inherent in existing pre-defined hyperspectral classification methods. It optimizes the use of hyperspectral imaging and ensures that the data provide information about the spectral characteristics of the targeted plant organs rather than the background. We demonstrated that high-resolution hyperspectral imagery, together with the proposed CNN model, can be a powerful tool for characterizing sunlit and shaded components of wheat canopies in the field. The presented method will provide significant advances in determining the spectral properties of shaded and sunlit canopy components under natural light conditions.
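The LDA step described above can be sketched in a minimal, self-contained way: a hand-rolled two-class Fisher discriminant projects synthetic "pixel spectra" onto a single axis, and a nearest-class-mean rule stands in for the paper's CNN classifier. All data, class labels and dimensions here are invented for the example; this is not the authors' implementation.

```python
import numpy as np

def lda_project(X, y):
    """Fit a two-class Fisher linear discriminant and return the
    1-D projection direction (spectral bands -> single feature)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix (regularised for numerical stability).
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

# Toy example: 50-band "pixel spectra" for two synthetic classes
# (e.g. shaded vs sunlit leaf) -- purely illustrative values.
rng = np.random.default_rng(0)
bands = 50
shaded = rng.normal(0.3, 0.05, (100, bands))
sunlit = rng.normal(0.6, 0.05, (100, bands))
X = np.vstack([shaded, sunlit])
y = np.array([0] * 100 + [1] * 100)

w = lda_project(X, y)
scores = X @ w
# Nearest-class-mean rule in the projected space
# (a stand-in for the CNN stage of the pipeline).
thr = (scores[y == 0].mean() + scores[y == 1].mean()) / 2
pred = (scores > thr).astype(int)
accuracy = (pred == y).mean()
```

In the paper's pipeline, the LDA-compressed features feed a CNN; here the projected score alone is enough to separate the toy classes.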

    DeepCount: In-Field Automatic Quantification of Wheat Spikes Using Simple Linear Iterative Clustering and Deep Convolutional Neural Networks

    Crop yield is an essential measure for breeders, researchers and farmers and may be calculated from the number of ears/m2, grains per ear and thousand grain weight. Manual wheat ear counting, required in breeding programmes to evaluate crop yield potential, is labour intensive and expensive; thus, the development of a real-time wheat head counting system would be a significant advancement. In this paper, we propose a computationally efficient system called DeepCount to automatically identify and count the number of wheat spikes in digital images taken under natural field conditions. The proposed method tackles wheat spike quantification by segmenting an image into superpixels using Simple Linear Iterative Clustering (SLIC), deriving canopy-relevant features, and then feeding the resulting feature model into a deep Convolutional Neural Network (CNN) for semantic segmentation of wheat spikes. As the method is based on a deep learning model, it replaces the hand-engineered features required by traditional machine learning methods with more efficient algorithms. The method is tested on digital images taken directly in the field at different stages of ear emergence/maturity (using visually different wheat varieties), with different canopy complexities (achieved through varying nitrogen inputs), and at different heights above the canopy under varying environmental conditions. In addition, the proposed technique is compared with a wheat ear counting method based on a previously developed edge detection technique and morphological analysis. The proposed approach is validated against image-based ear counts and ground-based measurements. The results demonstrate that the DeepCount technique is highly robust to variables such as growth stage and weather conditions, demonstrating the feasibility of the approach in real scenarios. The system is a step towards portable, smartphone-assisted wheat ear counting systems, reduces the labour involved and is suitable for high-throughput analysis. It may also be adapted to work on RGB images acquired from UAVs.
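The SLIC stage can be illustrated with a deliberately simplified sketch: plain k-means over joint colour and scaled-position features on a tiny synthetic image. Real SLIC restricts each centre's search to a local window and works in CIELAB space; this global, RGB-space version only conveys the idea and is not the DeepCount implementation.

```python
import numpy as np

def simple_slic(img, n_seg=4, compactness=0.1, iters=5):
    """Very simplified SLIC-style superpixels: k-means over joint
    (colour, scaled-position) features. Real SLIC limits the search
    to a local window; this global version is for illustration."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    pos = np.stack([yy, xx], axis=-1) / max(h, w)
    feat = np.concatenate([img, compactness * pos], axis=-1).reshape(-1, 5)
    # Initialise centres at evenly spaced pixels.
    idx = np.linspace(0, feat.shape[0] - 1, n_seg).astype(int)
    centres = feat[idx].copy()
    for _ in range(iters):
        d = ((feat[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_seg):
            if (labels == k).any():
                centres[k] = feat[labels == k].mean(0)
    return labels.reshape(h, w)

# Toy image: left half dark "soil", right half bright "canopy".
img = np.zeros((20, 20, 3))
img[:, 10:] = 1.0
labels = simple_slic(img, n_seg=2)
```

The `compactness` weight trades colour homogeneity against spatial regularity, mirroring the role of the compactness parameter in standard SLIC.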

    Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping

    Background: Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, it decreases when applied to images acquired in dynamic field environments. Results: In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with state-of-the-art and other learning methods on digital images. All methods are compared and evaluated under different environmental conditions against the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance along the full life cycle of a wheat canopy. Conclusion: The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need to adjust a threshold for each digital image. The proposed method is also an ideal candidate for processing time series of phenotypic information acquired in the field throughout crop growth. Moreover, it is not limited to growth measurements and can be applied to other tasks, such as identifying weeds, diseases and stress.
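For context, a typical classical baseline that such a learning method is compared against combines the excess-green index with Otsu thresholding; the sketch below (synthetic image, illustrative values) shows the kind of per-image global thresholding the multi-feature approach is designed to avoid.

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalised channels."""
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

def otsu_threshold(x, bins=64):
    """Otsu's method: pick the threshold maximising the
    between-class variance of the value histogram."""
    hist, edges = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    centres = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    m = np.cumsum(p * centres)
    mt = m[-1]
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    var_b = np.where(valid, (mt * w0 - m) ** 2 / (w0 * w1), 0)
    return centres[var_b.argmax()]

# Toy image: a green "plant" patch on a brown "soil" background.
img = np.zeros((10, 10, 3))
img[...] = [0.4, 0.3, 0.2]          # soil
img[3:7, 3:7] = [0.2, 0.6, 0.2]     # vegetation
exg = excess_green(img)
mask = exg > otsu_threshold(exg)    # fractional cover = mask.mean()
```

On this clean toy scene the index works perfectly; under the changing field illumination described above, the histogram shifts and the global threshold breaks down, which motivates the learned multi-feature segmentation.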

    Automated method to determine two critical growth stages of wheat: heading and flowering

    Recording growth stage information is an important aspect of precision agriculture, crop breeding and phenotyping. In practice, crop growth stage is still primarily monitored by eye, which is not only laborious and time-consuming but also subjective and error-prone. The application of computer vision to digital images offers a high-throughput and non-invasive alternative to manual observation, and its use in agriculture and high-throughput phenotyping is increasing. This paper presents an automated, computer-vision-based method to detect the wheat heading and flowering stages. The bag-of-visual-words technique is used to identify the growth stage during heading and flowering within digital images. The scale-invariant feature transform (SIFT) is used for low-level feature extraction; subsequently, locality-constrained linear coding and spatial pyramid matching are applied in the mid-level representation stage. Finally, a support vector machine classifier is trained and tested on the data samples. The method outperformed existing algorithms, yielding accuracies of 95.24%, 97.79% and 99.59% at the early, medium and late stages of heading, respectively, and 85.45% accuracy for flowering detection. The results also illustrate that the proposed method is robust enough to handle complex environmental changes (illumination, occlusion). Although the proposed method is applied only to identifying growth stage in wheat, there is potential for application to other crops and categorization tasks, such as disease classification.
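Once a visual vocabulary has been learned, the bag-of-visual-words encoding at the core of such a pipeline reduces to nearest-word assignment followed by histogram building. The sketch below uses a tiny hypothetical 2-D codebook and descriptors in place of real SIFT features; it illustrates only the encoding step, not the full LLC/SPM/SVM pipeline.

```python
import numpy as np

def bovw_encode(descriptors, codebook):
    """Encode a set of local descriptors as a normalised
    bag-of-visual-words histogram over a fixed codebook."""
    # Squared Euclidean distance from every descriptor to every word.
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)                       # hard nearest-word assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Hypothetical 3-word codebook in a 2-D descriptor space.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
descriptors = np.array([[0.1, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 0.1]])
hist = bovw_encode(descriptors, codebook)     # -> [0.5, 0.25, 0.25]
```

The resulting fixed-length histogram is what a classifier such as an SVM consumes; locality-constrained linear coding replaces the hard assignment here with a soft code over the few nearest words.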

    Field Scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring

    Current approaches to field phenotyping are laborious or permit the use of only a few sensors at a time. To overcome this, a fully automated robotic field phenotyping platform has been established, with a dedicated sensor array that may be accurately positioned in three dimensions and mounted on fixed rails, facilitating continual and high-throughput monitoring of crop performance. The sensors comprise high-resolution visible, chlorophyll fluorescence and thermal infrared cameras, two hyperspectral imagers and dual 3D laser scanners. The sensor array facilitates specific growth measurements and identification of key growth stages with dense temporal and spectral resolution. Together, this platform produces a detailed description of canopy development across the crop's entire lifecycle, with a high degree of accuracy and reproducibility.

    Functional QTL mapping and genomic prediction of canopy height in wheat measured using a robotic field phenotyping platform

    Genetic studies increasingly rely on high-throughput phenotyping, but the resulting longitudinal data pose analytical challenges. We used canopy height data from an automated field phenotyping platform to compare several approaches to scanning for quantitative trait loci (QTLs) and performing genomic prediction in a wheat recombinant inbred line mapping population based on up to 26 sampled time points (TPs). We detected four persistent QTLs (i.e. expressed for most of the growing season), with both empirical and simulation analyses demonstrating superior statistical power of detecting such QTLs through functional mapping approaches compared with conventional individual-TP analyses. In contrast, even very simple individual-TP approaches (e.g. interval mapping) had superior detection power for transient QTLs (i.e. expressed during very short periods). Using spline-smoothed phenotypic data resulted in improved genomic predictive abilities (5–8% higher than individual-TP prediction), while the effect of including significant QTLs in prediction models was relatively minor (<1–4% improvement). Finally, although QTL detection power and predictive ability generally increased with the number of TPs analysed, gains beyond five or 10 TPs chosen based on phenological information had little practical significance. These results will inform the development of an integrated, semi-automated analytical pipeline, which will be more broadly applicable to similar data sets in wheat and other crops.
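The smoothing of longitudinal phenotypes before genomic prediction can be sketched as follows. All numbers are invented: the canopy-height series is a synthetic logistic growth curve with noise, and a cubic polynomial fit stands in for the spline smoothing used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical canopy-height series: 26 time points (days after sowing),
# logistic-like growth (cm) plus measurement noise, as a stand-in for
# the platform's longitudinal data.
days = np.linspace(0, 125, 26)
true_height = 90 / (1 + np.exp(-(days - 60) / 12))
observed = true_height + rng.normal(0, 3, days.shape)

# Smooth the series before feeding it to a prediction model; a cubic
# polynomial fit stands in here for the splines used in the study.
coeffs = np.polyfit(days, observed, deg=3)
smoothed = np.polyval(coeffs, days)
residual_sd = np.std(observed - smoothed)
```

Smoothing suppresses day-to-day measurement noise while preserving the growth trajectory, which is the property the study credits for the improved predictive ability of spline-smoothed data.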

    A Range of Earth Observation Techniques for Assessing Plant Diversity

    Vegetation diversity and health are multidimensional and only partially understood due to their complexity. So far there is no single monitoring approach that can sufficiently assess and predict vegetation health and resilience. To gain a better understanding of the different remote sensing (RS) approaches available, this chapter reviews the range of Earth observation (EO) platforms, sensors, and techniques for assessing vegetation diversity. Platforms include close-range EO platforms, spectral laboratories, plant phenomics facilities, ecotrons, wireless sensor networks (WSNs), towers, air- and spaceborne EO platforms, and unmanned aerial systems (UAS). Sensors include spectrometers, optical imaging systems, Light Detection and Ranging (LiDAR), and radar. Applications and approaches to vegetation diversity modeling and mapping with air- and spaceborne EO data are also presented. The chapter concludes with recommendations for the future direction of monitoring vegetation diversity using RS.

    Contribution of high-resolution thermal infrared imagery to high-throughput on-site phenotyping of an apple progeny subjected to water stress

    The genetic variability of fruit trees in response to drought stress is scarcely studied. As the adaptation of scion cultivars to abiotic constraints constitutes a new challenge for fruit production, in particular where water scarcity is likely to occur, high-throughput phenotyping strategies applicable in the field are needed to assess tree response to soil drought in large populations, overcoming the limitations of the usual in-planta measurements. In this research, remotely sensed images were acquired by ultra-light aircraft (ULA) and an unmanned aerial vehicle (UAV) during 4 years in a field trial in which an apple progeny (122 hybrids) was studied under contrasting summer irrigation regimes. Ortho-images were simultaneously acquired in the visible (RGB), near-infrared (NIR) and thermal infrared (TIR) bands. After orthorectification, georeferencing and mosaicking, the RGB and NIR images were used to compute different vegetation indices over the field trial, while TIR imaging allowed extraction of the vegetation surface temperature (Ts), which was calibrated at ground level using hot and cold reference targets. Moran's water deficit index (WDI), which combines the surface-minus-air temperature difference (Ts-Ta) and the normalized difference vegetation index (NDVI), was used as a stress phenotypic variable. WDI estimates the ratio of actual to maximal evapotranspiration (WDI = 1 - ETact/ETmax) in discontinuous plant covers. Like the Ts-Ta variable, it significantly discriminated tree water statuses and genotypes. On the basis of different plant- and image-based indices, individual tree behaviour trends (isohydric vs. anisohydric) can be distinguished within the progeny, irrespective of tree vigour. This opens potential applications for plant breeding, and the genetic bases of the apple tree response to water stress are currently being investigated through quantitative trait locus (QTL) detection. Flying the ULA at 40-60 m altitude greatly improved the TIR image resolution (~10 cm) and limited the number of mixed vegetation/soil pixels. However, careful image post-processing will be required, possibly including classification and/or segmentation.
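The WDI calculation can be sketched as below. The wet and dry baselines and the temperature values are hypothetical; in practice the baselines vary with fractional vegetation cover, which is the NDVI axis of the index.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def wdi(ts_ta, ts_ta_wet, ts_ta_dry):
    """Water deficit index, WDI = 1 - ETact/ETmax, approximated from the
    canopy-air temperature difference (Ts-Ta) relative to the fully
    transpiring (wet) and non-transpiring (dry) baselines that apply
    at the pixel's vegetation cover."""
    return np.clip((ts_ta - ts_ta_wet) / (ts_ta_dry - ts_ta_wet), 0, 1)

# Hypothetical baselines: -3 K for a well-watered canopy, +6 K for a
# non-transpiring one; three example Ts-Ta observations.
ts_ta = np.array([-3.0, 1.5, 6.0])
w = wdi(ts_ta, ts_ta_wet=-3.0, ts_ta_dry=6.0)   # -> [0.0, 0.5, 1.0]
```

A WDI of 0 thus corresponds to maximal transpiration (ETact = ETmax) and 1 to no transpiration, matching the formulation quoted in the abstract.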